Alan Kay: Is it too late to create a healthy future? [video] (nih.gov)
272 points by tosh 3 months ago | 100 comments



> "It's not doing more of what we are doing and it means if we've done things with technology, for example, that have gotten us into a bit of a pickle, doing more things with technology at the same level of thinking is probably going to make things worse."

Boy if I could shout this from the rooftops to all the AI "luminaries" or people that think AI is going to deliver us to some utopian future where work is now unnecessary and we are all free to "chase our passions" (and along with it, this junk notion that UBI is somehow now fundamentally required to continue society), I would. This wisdom is greatly needed for all the current "future techies" out there.


I don't know if AI luminaries advocating a certain utopia actually exist. Pretty much all the reasonable voices I've seen on AI in the past decade either dismissed it entirely or advocated caution, arguing that there's a small chance of utopia if we do it right, and a high chance of destroying humanity.

As for UBI, it's one way of addressing the fact that our civilization is mature enough to abandon the concept of having to work to be able to live (not to mention, non-bullshit jobs are steadily disappearing).


> our civilization is mature enough to abandon the concept of having to work to be able to live

Or at least make the concept universal. It's been alive and well for the ruling classes for centuries.


The ruling class works too. They rule.


Some do, some don't, in my experience (there are many scions of the ruling class who choose to do little).

But that's all irrelevant in any case, because the issue in question here wasn't working or not-working, but having to work to be permitted to live. Most of us have to. The ruling classes don't.


Rulers have to work to maintain or expand, otherwise they decline.


Hey, are you from the ruling classes?


No, but I've had a pretty close view of their undersides.


A Utopia doesn’t change that you’re human with ups and downs.


Don't project your conditions onto everyone else, please.


Some people are hungry.


If I ever see something even approaching generalized intelligence, I might be slightly concerned.

But, thus far, our best AIs are simply pattern recognition systems strung together like Christmas lights.

There's also no reason to think that 'intelligence' can be exponentially scaled or that any system can recognize how to improve itself -- or have the capacity to do so in situ.


> But, thus far, our best AIs are simply pattern recognition systems strung together like Christmas lights.

Completely agree. I do believe calling current run of ML/DNNs "AI" is stretching it (and I expect to be hit with the "AI is a moving goalpost" argument but seriously? is it a moving goalpost for those who know how the current generation of solutions work?).

> There's also no reason to think that 'intelligence' can be exponentially scaled or that any system can recognize how to improve itself -- or have the capacity to do so in situ.

There's no reason to assume otherwise. Exponential scaling (i.e. an "intelligence explosion") may be a contestable assumption, but really, we're in big trouble if a general AI reaches human level. We know this level is possible because we exist. Even if AI did just that, we'd have to contend with a new sapient being, and give it ethical consideration. But I see nothing preventing that AI from being as smart as 10 or 100 humans combined.

The rest really follows from nature of technological artifacts - if you can make one, you can make one thousand (applies to both hardware and software) and make it work as one; you can iterate on it, and if you can, then a human-level AI will be able to do it too.


Unless we are examples of such a system.


So, 3.5 billion years of organic computer evolution from now is when we should begin to worry? ;-)


Our technology is several orders of magnitude faster than biological evolution :). Things on Earth have stopped being primarily driven by genes some time ago now.


You are literally describing ourselves.


We are nowhere near rich enough to afford UBI at this point. Run the numbers: even if you taxed every last billionaire at 100%, you'd be nowhere near enough money for even the most basic UBI.

Who knows, maybe in a thousand years? I don't see it happening in our lifetimes.

And what are all those AIs supposed to be working on? Based on the direction it's currently heading, it looks like we'll just be getting more of the same stuff we don't need.

If AI is going to help humanity, it needs to be applied to Maslow's hierarchy of needs. 99% of society still isn't past the first row. Look at the bottom row; it's pretty simple: shelter, food, water, and, by extension, what it takes to afford those things: transportation and medical insurance.


Who are these luminaries you speak of? I know Neil deGrasse Tyson thinks AI is going to be beneficial, but I don't know who else takes that position. Most of the "experts" quoted on AI are not the scientists and engineers actually building it, but "expert philosophers" like Sam Harris who assume their prosaic musings can capture the future of technological innovation, and they are mostly gloomy or neutral on the subject. So, who's out there actually building AI and declaring it'll lead to utopia?


Elon Musk. Google/Waymo. Uber. Every car company working on "self-driving." Amazon. Etc. Literally everyone in tech is working on an AI-powered pipe dream.


Zuckerberg is another popular figure who is optimistic about AI.


that... is probably more concerning than anything else in this topic


"popular figure" -> "public figure"... is probably more correct these days.


Yea I intended well-known, not well-liked


History proves otherwise. Technology crammed people into cities, which was terrible until it allowed us to create better infrastructure to cope with the density. Technology made London air unbreathable until it gave rise to cleaner sources of energy which we could place further away from people. Technology often times makes things worse, and then helps us fix the problems that arise. And the net result is longer, healthier lives for the vast majority of people on the planet. So I say "keep going"!


History isn't done yet. Your rosy prognosis appears to ignore current widespread ecosystem collapse and mass extinction, growing geopolitical instability, and the looming possibility of the biosphere becoming incompatible with human civilization as it is currently implemented.


Humanity is indeed driving towards a cliff and the current political and economic system globally is not sustainable, you'll have no argument there. Where we differ is that I see a thread throughout history, especially recently, that we tend to invent our way to solutions. History is indeed not done yet, so how about neither of us call the game just yet?


Unfortunately I feel the same way, and it is hard to find any sort of happiness with this mindset. How do you handle it?


For me, I try to educate myself and continue to open my heart and mind to the suffering and problems of the world. This is painful and only recommended in small doses. I see a lot of suffering, and not just human suffering, and I am a white male human in Colorado, so I probably don't see the majority of suffering due to my privileges and luxurious life. The only other option is to turn away from it all, and I don't want to. I don't want to be part of the problem and live with so much blood on my hands. I try to cope with it by waking up each day and trying to make a positive difference in any small way that I can. Learning, sharing what I learn, talking with others, learning the problems of others, volunteering social services, voting, and brainstorming economic, social and political solutions. I find happiness daily in the beauty of the sky or animals or plants. This is truly an incredible world and I must be blessed to have had the chance to experience it, even if things are not going so great for the community of Earth.


Poorly, if at all.


Yea, our technological ability is not what's limiting humanity; our ethical, or dare I say moral, abilities are the limit.

I would wager that's probably what the Great Filter is. Survival in nature inevitably transitions into survival among other intelligences. Intelligence seeks to preserve itself, not its eventual mastery of the universe, and it'll be a hard battle to win: even if 99% are good-natured, the remaining 1% will still undermine others for their own benefit.


AI is here and its use will increase no matter what you desire. It's really no different than any other technology. As a species, we've consistently been getting more and more efficient in most aspects of our lives. How that translates into the well being of the many is a completely different topic. Being more efficient is ultimately only a good thing. But that doesn't mean it will lead to a utopia.

Our problems always have been, and always will be, each other. You point to AI as a problem, but we all should really be looking in the mirror instead. It's up to all of us to solve the social troubles that will arise from continual increases in efficiency.

UBI is one idea that might help prevent a severe imbalance of resources in society. But it alone won't be enough to solve our social problems, and I don't think most people claim it will. It's a start.

If this is really something that keeps you awake at night, be a part of the solution and help us think of realistic ways to address these issues. Going backwards in time isn't an option.


Is AI really here? We process a lot of data, and we've done so for years, yet so far ML hasn't been able to do it better than our in-house analytical scientists.

I see how automating those analytical scientists' work pays off at huge companies like Google, where you probably save quite a few millions by having fewer employees, but even at our 20,000-employee municipality, AI is an expense. Sure, we could maybe cut the data/analytical staff down to half the size if we replaced them with data scientists, but data scientists are more expensive, and you'd also need to pay a license for whatever ML platform you use. None of our POCs with various consultants or academics have produced anything better than what we already have. In fact, in every test, it turned out our own human-built models were a lot better at predicting and auditing than ML.

Heh.

AI is also struggling outside of big data. We've run a POC on self-driving cars for three years now, and they don't work at all. They certainly could work, if we replaced and maintained every piece of road infrastructure to their exact needs, but that's never going to happen.

I guess AI will eventually find uses outside of advertising and privacy invasion, and don't get me wrong, we certainly use it for things like trawling through millions of old case files as part of our quality control. So it's not completely useless, but I don't see how it'll change the world anytime soon.


Waymo has self-driving cars without safety drivers going around Phoenix, so they at least kind of work. I don't know if that will roll out globally in the near term, but it might.


You're right. It's about our thinking. You and I. It's Alan Kay's quote he wants to shout. We need to get better at our solutions, and how we share them.


Agreed. There is a real poverty of deep "meta" thinking today.


There's always the same thing I find disturbing in every Alan Kay talk I've seen recently: it's often a bunch of anecdotal (yet interesting) data put together that doesn't give the impression of moving the talk forward. The general idea is often pretty vague; it doesn't go much beyond generalities. And there are always those tiny bits of self-promotion (mentioning Xerox PARC inventing everything we have today), like someone who feels his work has been underappreciated and lacks peer recognition.

It's like one of those general-public TED inspirational talks, except it tries to disguise itself as a scientific or philosophical one.

I always end up watching the first 20 minutes because the anecdotes are entertaining in themselves; then I realize the talk isn't going anywhere, and I go back to work.

(And then I go back to rereading the Wikipedia page about Alan Kay to try to remember what made him so famous.)

PS: As a comparison, I think Bret Victor's talks are immensely more interesting. They also address the general use of computers today, make you think about it quite deeply, inspire you to try new things, and actually demonstrate something new every time. And you never hear him mention his work at Apple.


I have a lot of respect for Alan Kay, but I know what you mean. Either I'm too dumb to understand much of what he's trying to convey (due to historical and technical ignorance, or lack of intelligence), his communication skills leave something to be desired, his talks are just hot air (I don't believe this), or human language isn't the best medium for conveying his impressive, large-scale ideas.

I've seen people ask him great questions (or at least I thought they were) and they only got a riddle in reply. That's only acceptable if you're a dragon or a hobbit in my opinion.

Instead, I'd like to see a short and concise summary of his ideas on where we are, what we're doing wrong, where we could improve, etc.

It would certainly be interesting to see a future where hardware is built to run a Smalltalk OS natively and you have full control all the way down, but he would probably say that I'm missing the point.


> I've seen people ask him great questions (or at least I thought they were) and they only got a riddle in reply. That's only acceptable if you're a dragon or a hobbit in my opinion.

Looking back on a lot of Alan Kay's writing, I've noticed that when he was young he tended to write extremely long-winded explanations of what he was trying to do. As he got older, the explanations got more terse. Now he doesn't explain at all: he just asks a related question. I don't think this is by accident.

I've been starting to go in the same direction. I'm prone to writing extremely expansive replies to questions (just see my posting history here ;-) ). Some people read it, but most will not. I found that while I lose some fidelity with a more terse answer, I get better traction from shorter answers. However, the audience I add by making my answers more digestible tends to interpret the answers literally -- meaning that they don't think past what I've written.

So I started wondering if replying with questions as Alan Kay seems to do now would be useful. It certainly has its advantages. Although it goes in the opposite direction of increasing your audience, if you feel that most people aren't going to "get it" anyway, perhaps that's not necessarily a loss. It also is a cue to say, "This is a complex issue and you need to go back and look at some fundamentals before you can understand the answer". Those people who aren't willing to do that, probably aren't willing to put the work into understanding the answer anyway. And finally, in the very likely case that my answer is not actually very good, asking a question instead allows the person to formulate a better answer than I can come up with. This latter bit is especially worth thinking about, I think.

But I've resisted doing that as it is certainly intimidating and in some ways makes you look like a jerk ;-). BTW, I recently read some of his comments on what it means to be "object oriented" and I tried my best to build a system based on what he said. The results were extremely illuminating for me. Whether or not it matches his view, I found that working hard on puzzling out his comments took me in valuable directions that I had not considered before.


I agree ... and could we see the system you mention in your last paragraph, please, or hear more about it?


I hate to link to my unfinished work, but here goes :-)

I started out by deciding to write a blog post. Here is my rambling second draft: https://github.com/ygt-mikekchar/oojs/blob/master/oojs.org#r...

However, as I was writing this blog post, it became obvious to me that I needed something better than Shape/Rectangle toy explanations. I needed to build something real. So I decided to build a test framework.

Here it is: https://gitlab.com/mikekchar/testy

I wrote a quick explanation of the design I was using here: https://gitlab.com/mikekchar/testy/blob/master/design.md You can read that first if you just want to see what I was doing and don't want the long winded explanation of how I got there from the blog post.

While I was writing this code, I decided to make a coding standard for myself because sometimes it helps to add constraints to examine how things are working. The coding standard is here: https://gitlab.com/mikekchar/testy/blob/master/coding_standa...

I suppose TL;DR: Objects should not represent ADTs. Instead they hold state. The object is really a collection of operations on that state (bundled together to give you better cohesion). The state is encapsulated in the object and should be inaccessible except through the operations. Objects should probably be immutable as well -- especially as it enforces that encapsulation. The best way to think about it is that the object itself is a monad (and it literally is). The methods on the object are functions that you would normally pass to bind. The "." that does the dispatch is essentially bind. I found that I never actually reached for subclass polymorphism (even though it was easy to implement). Instead I used essentially traits and I think this is in keeping with the modern idea that OO should encourage composition over inheritance.

Why do this instead of FP? Well, as you can see, my code is really FP with "FP Objects". I think the thing I liked about this is the idea that the objects created a nice abstraction for cohesive code. It's not that different than type classes (and I always think it's funny that type classes are usually implemented with virtual function tables). However, it is slightly more restricted in terms of generic functions. In a system without static type checking, I think it's easier to reason about this code. YMMV.

In the end, I found this way of programming very enjoyable. The code is still quite crufty, so please don't judge me ;-) I was just writing it to explore the ideas. It's not production code.

I considered going on and finishing Testy and possibly building something else with that style of programming, but it turns out that all current implementations of JS are quite inefficient when using closures (and may even leak memory!), so I decided to work on something else.

If you have any questions or comments, feel free to fire away.


I kind of see what you're saying. However, I'd much rather have a real and thorough answer than a question. I start by skimming, and if I need the info it is there.


As someone who basically feels that they "get" Alan Kay (although some talks have repeated content, admittedly), this sounds like the response I'd give. He isn't in the business of peddling specific answers, but of encouraging Socratic questioning, and that leads to a pattern of thinking that avoids honing in on a linear, slot-A-goes-in-tab-B resolution process. (Bret Victor comes much closer to doing this, with the gee-whiz flashiness, and gives off a kind of regurgitated-Engelbartisms vibe in the process, which is why I don't find him very exciting.)


> Either I'm too dumb to understand much of what he's trying to convey

I have a lot of respect for Alan Kay too but what you're hinting at is something that's bothered me about him for a long time: his "solution" for almost every problem is for everyone to be as smart as his Xerox PARC dream team.

He's like the Phil Jackson of technology: sure, he's deserving of a ton of respect, but he wouldn't win anything with the 2012 Bobcats. That strategy doesn't work with society at large. You've got to work with the people who are, not those you wish would be.


One of the main takeaways in this talk addresses this directly. He's not advocating for everyone being smart, in fact being smart counts for very little. So how do we move forward as a species? The big ideas. However, it's hard to get adults to change and start using the big ideas so teaching kids is our best move.


His talk is exactly what he talks about: he presents ideas at a level of understanding of those listening to him. Instead of being scientific or philosophic, he gives interesting examples such that people immediately get it.

Alan's all about looking at the big picture and not getting lost in irrelevant details. He tries to get his point across right away by showing how NIMH is so focused on canonical mental health problems that they don't think about where these problems come from, nor about even bigger mental "problems" (such as the 99.77%, if you watched through the talk), nor about what "mental" means in the first place.

He may not know how to fix healthcare, but he shows what is necessary for us to think about if we ever want problems to be solved. He then follows with examples from the areas he's more familiar with, namely tech and education, to show how to get to solutions without brute force.

Human critical thinking at its very best.


”Instead of being scientific or philosophic, he gives interesting examples such that people immediately get it.”

How is being scientific or philosophic exclusive from giving interesting examples that people immediately get?

I’d say being able to speak in a way that your audience gets is an inherent part of philosophy.

Being scientific isn’t being complicated, it’s basing what you say on a foundation of evidence.

Being philosophical isn’t being abstract, it’s basing what you say in reason.

If you’re neither of those things, you’re seldom worth listening to.


Yes, you are definitely right.

My point is that people refer to being scientific or philosophic as being detached from ordinary people, as if scientists or philosophers would purposely want to be seen as different and hard to understand (e.g., see the critique of the Deep Learning academic community by Jeremy Howard during the fast.ai DL courses, an example that pops up quite often here on HN).

I meant that Alan doesn't have an arrogant attitude, but I could've phrased my sentence in a better way. Thanks for the feedback.


Have you watched the whole video? This talk is actually about children's education (to build a better future). But then he brings up death camps, his dentist not replacing plastic protection often enough, Francis Bacon's new science, Aristotle not knowing the number of teeth in his own wife's mouth, some statistics in the middle about mental disorders, etc., etc.

It may be human critical thinking, but then, please, it needs some structure and a feeling of progression in the reasoning, actually leading somewhere (and if that somewhere could be anything newer than just "computers could be better learning devices," that would actually help).


Or you could put the slightest modicum of effort into trying to see the very clear picture he has painted...


Which is? That people make mistakes? That people are imperfect and need to improve?

I don't see any other common point among all those examples, and the talk didn't give me any either...


I think this talk addresses the question "What do we need to do so that children born today have a healthy world to live in when they're 82, in the year 2100?" Bret Victor is amazing and interesting, but he's not addressing the same issues. Nor are the issues he addresses nearly as important. If you only watched the first 20 minutes of this, you probably missed the point about needing to radically change how we educate children, and Alan Kay's ideas for how to do that, which, as far as I'm concerned, are dead on the money.


I did end up watching the whole talk, just to make sure I didn't miss the point.

What do you mean by his ideas on how to educate children? Not dumbing down, and in the process using computers?

I don't think this in itself is anything original, unless you provide some examples of how that would look today (the video sample in the talk shows a child manipulating a computer interface from the early '90s, so I couldn't tell how old that video was).

But more importantly, why spend the first 40 minutes on general pseudo-scientific anecdotal statements if you're eager to talk about that? Do you think people really need to be convinced that education is important?


> What do you mean by his ideas on how to educate children? Not dumbing down, and in the process using computers?

Well, I took away a lot more than 'not dumbing down'. One of the key things he described is the fact that very few people can sustain their attention for ingesting information, and that getting anyone engaged in practice and exploration at the right level is crucial to them successfully mastering concepts. The 't-ball for x' idea. Whether teachers are teaching stuff that's dumbed down or not turns out not to make a difference: People learn stuff by grappling with it in a 'hands on' way. Getting real mathematicians and scientists involved to create these kinds of environments is part of this. He led with saying this has been said before. My own take is that he said it very well, but clearly that's not everyone's experience. What were the pseudo-scientific statements that stand out for you? My detector didn't go off on anything as pseudo science.


Yes, people really need to be convinced that good education is more important than most things. I knew this at some level, but the talk put a lot of things together. And it motivated me to at least think about the value of passing on some of the things I know (and to try to do that in the 'honest, directed' way he described) rather than using that knowledge just to make a living.


I have always found it interesting how some people seem to make an effort to justify to themselves - in all sorts of ways - their failure to grasp something.

Alan Kay is a supreme genius, there can be no question about that, but also an original and deep thinker who is not afraid of not falling in line with the prevailing ideology of the day. He is deeply rooted in the outside, the sphere of human thought and potential that iconoclastic, transformative ideas spring from. And here you come, unfavorably comparing him to Bret Victor rather than seeking to fill the gaps within you that do not let you appreciate what the genius is plainly talking about. In a way you are a testament to the point Alan Kay is making. Your inability to grasp simple - yet unpopular and certainly not easy - ideas and your preference for easily digestible material demonstrate the degree of cultural erosion in play today.

In a world where people like Tim Berners Lee are given the Turing award and people like Alan Kay are ostracized and ignored, languages like JavaScript and PHP proliferate whilst the best ideas behind Smalltalk and Lisp are still not widely used because the armies of 9-5 for-money “coders” are not in a position to appreciate them, in such a world you have people reaching for Bret Victor and describing an Alan Kay talk as boring. Sad state of affairs.


> In a world where people like Tim Berners Lee are given the Turing award and people like Alan Kay are ostracized and ignored

Well, to be fair, Alan has been a Turing Award recipient since 2003. He's hardly ostracized, and if he's ignored, it's because his cause is ignored.

He speaks of a broken present leading to a wider broken future. And to conceive that things are broken requires a deep understanding of the fundamentals of what you're looking at. Is your car currently horribly inefficient? You probably (like me) would have no idea. It would require knowing what every part does, what it was designed to do, and why. That's a big ask. It requires a deeply fundamental understanding of it all.

And this -- this -- is what he's talking about in the lecture. He's gone from proselytizing the brokenness to preaching how to build the new mindset that can recognize it, which is the first and best shot at addressing it.


I dunno, it ties together rather well, IMO. It's only our future, intellectual honesty, social divides, and everything wrong with society addressed with the wisdom and distillation of decades of well formed thought condensed into presentation form. No big deal. This is like a holy revelation and you don't get it????


I agree, and there are only maybe half a dozen people in the whole world whose tidbits of wisdom are so good, and so well delivered, and so inspiring, that I don't mind them having this vagueness. Alan Kay is one of them.

If everyone was like this (and I agree, a lot of TED speakers are), I'd go crazy.


Vagueness? He's simply discussing comparative pedagogical models and techniques! Jumping around a bit? We've been culturally OK with that since MTV circa 1993, or the beatnik authors of the late '50s, or virtually any film or literature that jumps around in the narrative timeline.


... I'll grant you, the Bret-Victor-esque "scratch demo" stuff was a bit of a non-sequitur. Once he got back to the limits of human cognition stuff and the "t-ball for human sciences" part, I felt we were on track for a proper lesson from an elder.


According to Alan Kay's talk less than a quarter of one percent of people are conscientious objectors when push comes to shove. That certainly limits the possibilities of the future. Despite literacy and history being studied very few dare to be a Daniel.


With any producer/consumer scenario, it is usually a 1-to-1,000-or-more ratio. Content producers, and people who interact rather than lurk, are 1 in 1,000 or more. Take this very article: probably thousands view it, and only a trickle comment; some participate, others mostly observe. This ratio also shows up in conversions, customer purchases, and more.

The fact is, most of human existence is spent observing, studying, thinking, following, and viewing existing knowledge, while very few interact or create new work; few are producing at any one time on any one subject or concentration, though we are all the 1s for something.

The key, when you are that 1 to the thousands or more, is to be real and truthful, to provide value, wisdom, and entertainment, and to be interesting; that can draw in more 1s toward an eventual critical mass and success, or world-changing paths. The trickle of small change grows slowly but eventually reaches critical mass, with similar conversion rates that start to compound.

Humans are a differentiation machine: they follow along the branches of success or current realities, and at a certain point differentiate themselves by forging a new path. Others then weigh the pros and cons, and sometimes that path becomes a main branch. But new branches take risk, dedication, resources, and creativity, as well as convincing others to follow.

The bright side way to look at it is, there is always a quarter of one percent of people that are conscientious objectors, speak up, make change, innovate, produce, create or more all the time.


I find, anecdotally, that someone usually has something where they are the 1/1000. For some people it's comment threads or sharing articles with friends, but somewhere they fall into a 1/1000 category where they put their creative effort. That's not to say there isn't a large number of people who will read your comment and not respond.

The other thing that may be the case is the overview effect, which was recently brought up: it may take 40-50 seconds to read a post, but if it takes everyone 10 minutes to write one, then even if everyone on HN spent a quarter of their time writing posts, you would still get the 1/100 ratio. Just food for thought.


That stat is bogus. Maybe technically correct based upon the status when you register for selective service, but look at the number of objectors during the Vietnam era:

> A total of 170,000 men received C.O. deferments; as many as 300,000 other applicants were denied deferment. Nearly 600,000 illegally evaded the draft; about 200,000 were formally accused of draft offenses. Between 30,000 and 50,000 fled to Canada; another 20,000 fled to other countries or lived underground in America.

Source https://www.swarthmore.edu/library/peace/conscientiousobject...


Is "conscientious objector" a formally defined status? I.e. "you are a conscientious objector iff you object to being drafted for the US military", or is it analogous to "this person will publicly speak out and/or rebel against horrors"?

Because I had the feeling while watching Alan's talk that he wasn't talking about military drafts -- which are exclusively male anyway, and he's talking about people, not men -- but rather about people in the more informal second sense.

Or, stated differently: people who remained passive during wars, even if they were actually against the mass murders and large-scale horrors.

Then again, if it's the second definition, where in the world did he get those statistics from? How would you even reliably determine who has or hasn't publicly stated their position? I mean, there's no reliable written historical record of this position, is there?


It is formally defined. You can declare you're a C.O. when you register for Selective Service.

https://www.sss.gov/consobj


Okay yeah, he's absolutely and clearly talking about the military drafts, he literally says it in his talk. Loosely quoted, "if we look at conscientious objectors for military drafts for all wars during the 20th century, it's always about a quarter of a percentage"...

Then again, reading your source, I get the impression that the Vietnam War had an especially and notably high level of public disagreement [emphasis mine]

> The Vietnam War, as it is popularly called although war was never officially declared by the United States, produced a very organized network of draft resisters and supporters [for more information about the history of the Vietnam War, click here]. Rejection of conscription stemmed from opposition to militarism and war itself, to disagreement with the United States' foreign policy in Indochina, and/or to the belief that the draft epitomized injustice as it was weighted heavily against African Americans, the poor, and the less educated. Whatever the reason, a sizable contingent of young men declared that this armed conflict at least had no claim on them. During this time, draft counseling services expanded sizably, and groups were formed all over the country to provide support for draft resisters. As dissent spread, it polarized new constituencies among professionals, civil rights groups, and women's organizations. Massive anti-war rallies were held, as well as rallies in which hundreds of young men turned in or burned their draft cards. GI resister groups spread, so that dissent was coming from the armed forces as well as those not yet in the military.

WWII, same page

> During WWII, the Selective Training and Service Act of 1940 dictated the terms by which more than 34 million American men, ages 18 to 44, participated in the war effort. Of the men who registered for the draft, there were 72,354 who applied for conscientious objector status. Of those, 25,000 accepted noncombatant service in the army, agreeing to work for the medical Corps or in anything that did not involve actual combat. Another 27,000 failed the basic physical examination. In the end, 6,086 C.O.s (4,441 of them Jehovah's Witnesses) went to prison for refusing to cooperate with Selective Service. Another 12,000 men entered Civilian Public Service (CPS), a program under civilian direction designed to accommodate C.O.s by having them do "work of national importance." [cf. Keim]

That is, 72,354/34,000,000 ~= 72/34,000 ~= ... 0.2%

... But if we read on, post-Vietnam War

> Draft registration was reinstituted in July 1980; from then until 1985, over 500,000 men refused or failed to register.

It's hard, because I have to estimate the number of people who were eligible for the draft, but let's say the ratio is about the same as it was in 1940:

In 1940, there were ~130mln people in the US, so ~65mln men; 34mln were draftable, and 34mln/65mln = 52% of men, so about a fourth of the population.

In 1985, there were ~238mln people in the US, so about 60mln men were draftable.

500,000 / 60,000,000 = 0.83%, which is still low but considerably higher than 0.2%; it's closer to 1%. [[edit: I just noticed that the 500,000 includes those who failed to register, whereas the earlier statistic doesn't. That would lower the estimate by quite a bit; I'm not going to redo it, but I'd guess it'd be around 0.2-0.3% again.]]
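For what it's worth, a quick script to redo the arithmetic above (all inputs are the rough figures quoted in this thread, not official census data, and the 52% draftable-men fraction for 1985 is just the 1940 ratio carried forward as assumed here):

```python
# Back-of-the-envelope check of the objector percentages discussed above.
# Inputs are the approximate figures from the thread, not official data.

# WWII: 72,354 C.O. applicants out of ~34 million registrants
wwii_rate = 72_354 / 34_000_000
print(f"WWII C.O. rate: {wwii_rate:.2%}")

# 1980-85: ~500,000 refused or failed to register.
# Estimate the draftable pool from the 1985 population (~238M),
# assuming half are men and ~52% of men are draft age (the 1940 ratio).
pop_1985 = 238_000_000
draftable_1985 = pop_1985 / 2 * 0.52   # ~62 million
rate_1985 = 500_000 / draftable_1985
print(f"1980-85 refusal rate: {rate_1985:.2%}")
```

which gives roughly 0.21% for WWII and 0.8% for 1980-85, matching the figures above.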

So oddly enough, I think you're right: Alan seems to be off with his statistic. The only explanation that comes to mind is that he's talking not about the US population but about the world population, but even then it's a bit weird, because he's specifically talking about conscientious objection in the 20th century.

Then again, even if the total average share of conscientious objectors was as "high" as 1%, his point would still stand: 99% of people aren't against killing if the culture allows it. Even a jump of an entire order of magnitude isn't enough to make his point any less valid, imho.


That's weird then; I don't really see Alan as the kind of person who'd gloss over something so obvious, especially after what he said about how it is basically the very thing that drives his lifetime of work...

... Not to forget his remark about the philosopher who got the number of teeth women have wrong. You're just insulting yourself if you make a similar mistake anywhere during the rest of your talk, lol.


I don't know what you're referring to with "the philosopher getting women's number of teeth wrong". Aristotle did get it wrong. There are scholarly work exploring why he got it wrong. Or am I just completely missing your point?


I forgot Aristotle's name, and I was on mobile so I couldn't watch the video to look it up.

My point was, making an example out of Aristotle's mistake is only "legitimate" if you yourself don't make any major mistakes in your talk. I.e., if Alan's wrong about the 0.23%, and then makes fun of Aristotle for getting a number wrong, then you're [Alan] just making a fool of yourself.

I don't question at all that Aristotle got the number of teeth wrong, lol.


Also Jesus Christ I hate writing on my phone, autocorrect manages to put words in places were I never wanted them to appear ;_;


* Funny name coincidence between the organization NIH (National Institutes of Health) and the notion of NIH (Not Invented Here). Because this lecture is all about reinventing computing paradigms.

* The video works fine in Google Chrome, but in Mozilla Firefox it seems to force the Adobe Flash player. Not cool.


Here is a youtube version: https://youtu.be/dTPI6wh-Lr0


Do you have DRM video disabled in Firefox? Maybe it's using Flash as a fallback.


Works fine for me, no Flash.

Firefox 63 on Windows, DNT enabled, uBlock Origin and Privacy Badger extensions.


Curious; I'm also on 63 (though on Linux) and it is showing Flash here. Though I do have the plugin enabled, so maybe it defaults to it in that case.


Linux + FF 62.0.3 seems to work just fine.


Firefox Developer Edition 64 on Mac w/ uMatrix and bunch of other extensions. JW Player seems to be playing the video for me.


(youtube-dl works)


this is mindblowingly good.


I love Alan Kay, but a significant portion of this talk is TEDx-like garbage: the wars, revolutions, and genocides of the past are about power, not mental health. Conscientious objectors are also not by default good people.

If you define most people as mentally ill, what does that say about your own diagnosis? The human brain has been selected (in an evolutionary sense), and its errors, if they exist at all, are likely related to its unnatural environment.

His example with the dentist and the plastic is actually just an example of skin in the game: the dentist doesn't have skin in the game, the patient does, so the patient notices.

Sure, older educated people could help the next generation, but given the inter-generational financial put they have participated in, I think it's obvious how much they actually care about the future. As the thought leader of their time said: "It's not me who will die, it's the world that will end."


You make the big assumption that such "mental illness" lies in a fault in the brain. Have rates of mental illness remained the same in all categories over the last 500 years?

When we don't know something, we ought not to operate as if we believe conclusions made on the basis of what we have not confirmed.


>we ought not to operate as if we believe conclusions made on the basis of what we have not confirmed.

What?


"Don't form generalizations or conclusions based on what you don't know."


At 32:09, his "brain" means genetics, culture, language, schools.

I don't think individual mental illness plays much of a role. Most people in power are rational, self-interested actors.


> Most people in power are rational self interested actors.

That gave me a chuckle


Needs a [video] tag.

Also: the non-click-baity title is "Is it too late to create the future?" (The answer, by Betteridge's law of headlines [1], is of course "no".)

[1] https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...


Here is the title Alan Kay had, which was changed by the NIH people (the talk starts about 4 min in):

If "the best way to predict the future is to invent it", is it too late to invent a healthy future?

The answer to this one is not clear and Kay's answer in this talk would be, "It depends on what we teach the children of today."

I would highly recommend watching the whole thing. Also: he says he started making up these pithy slogans because most people can't read/understand a paragraph of text.


Thank you for that link. Betteridge's law should probably be reformulated with criteria beyond "Does it end in a question mark?" A quick scan of the current CNN headlines turns up the following:

* Why is this star MLB pitcher being ignored? (N/A)

* Who do you think should be CNN Hero of the Year? (N/A)

* What's behind Whitey Bulger's death? (N/A)

* Facebook is pivoting. Will users follow? (No!)


It's almost a 100%-accurate heuristic if you change the criterion to "Does the headline end with a yes-or-no question?".


What irks me in the title is the "the". As if there's only one possible future, "the future", and if we can't attain it, we're toast.

If the past teaches us anything, it is that possible futures are many, our understanding of what's possible is fluid, and predicting and planning the future is hard.


If one watches the actual lecture, Kay says that his real title was altered by an "NIH low pass filter".

The actual title was "Is it too late to create a healthy future?"


Thanks! We've updated the headline here.


In this case, the choice might be practical, as futures are generally taken to be financial instruments in the US.


A zero article could work: future as a non-measurable substance, like water; or future as something that doesn't need specification with a "the", like "captain" on a ship.


I have trouble imagining a sentence where a speaker would truly have difficulty disambiguating the two senses. People rarely have trouble disambiguating even incredibly homophonous words like set, run, or bank.


I have trouble imagining such a sentence where a speaker wouldn't think it's about banking. "Futures" in the plural referring to time is something I've only seen in incredibly nerdy phrases, like "set of all possible futures".


hence the plural in Futures Studies

https://en.wikipedia.org/wiki/Futures_studies

...blame it on the low-pass filter of mass communication


I haven’t watched the video, but I’d love to know what other people’s answers to the question are.

> Is it too late to create a healthy future?

Thoughts?

Personally, I don’t think so.

Creating a healthy future depends on defining what is unhealthy (within reason; this is also subjective).

Providing context, and teaching people/our children why we think it is unhealthy.

Letting them decide on their own.

