Boy, if I could shout this from the rooftops to all the AI "luminaries", or to the people who think AI is going to deliver us to some utopian future where work is unnecessary and we are all free to "chase our passions" (along with the junk notion that UBI is now somehow fundamentally required for society to continue), I would. This wisdom is greatly needed by all the current "future techies" out there.
As for UBI, it's one way of addressing the fact that our civilization is mature enough to abandon the concept of having to work to be able to live (not to mention, non-bullshit jobs are steadily disappearing).
Or at least make the concept universal. It's been alive and well for the ruling classes for centuries.
But that's all irrelevant in any case, because the issue in question here wasn't working or not-working, but having to work to be permitted to live. Most of us have to. The ruling classes don't.
But, thus far, our best AIs are simply pattern recognition systems strung together like Christmas lights.
There's also no reason to think that 'intelligence' can be exponentially scaled or that any system can recognize how to improve itself -- or have the capacity to do so in situ.
Completely agree. I do believe calling the current run of ML/DNNs "AI" is stretching it (and I expect to be hit with the "AI is a moving goalpost" argument, but seriously, is it a moving goalpost for those who know how the current generation of solutions works?).
> There's also no reason to think that 'intelligence' can be exponentially scaled or that any system can recognize how to improve itself -- or have the capacity to do so in situ.
There's no reason to assume otherwise. Exponential scaling (i.e. an "intelligence explosion") may be a contestable assumption, but really, we're in big trouble if a general AI reaches human level. We know this level is possible because we exist. Even if AI did just that, we'd have to contend with a new sapient being and give it ethical consideration. But I see nothing preventing that AI from being as smart as 10 or 100 humans combined.
The rest really follows from the nature of technological artifacts: if you can make one, you can make a thousand (this applies to both hardware and software) and make them work as one; you can iterate on it, and if you can, then a human-level AI will be able to do it too.
Who knows, maybe in 1000 years? I don't see it happening in our lifetimes.
And what are all those AIs supposed to be working on? Based on the direction it's currently heading, it looks like we'll just be getting more of the same stuff we don't need.
If AI is going to help humanity, it needs to be applied to Maslow's hierarchy of needs. 99% of society still isn't past the first row. Look at the bottom row; it's pretty simple: shelter, food, water, and, by extension, the means to afford those things: transportation and medical insurance.
I would wager that's probably what the Great Filter is. Survival in nature inevitably transitions into survival among other intelligences. Intelligence seeks to preserve itself, not its eventual mastery of the universe, and that will be a hard battle to win: even if 99% are good-natured, the remaining 1% will still undermine others for their own benefit.
Our problems always have been, and always will be, each other. You point to AI as a problem, but we all should really be looking in the mirror instead. It's up to all of us to solve the social troubles that will arise from continual increases in efficiency.
UBI is one idea that might help prevent a severe imbalance of resources in society. But it alone won't be enough to solve our social problems, and I don't think most people claim it will. It's a start.
If this is really something that keeps you awake at night, be a part of the solution and help us think of realistic ways to address these issues. Going backwards in time isn't an option.
I can see how automating that kind of analytical work saves huge companies like Google millions by employing fewer people, but even at our 20,000-employee municipality, AI is an expense. Sure, we could maybe cut the data/analytical staff down to half its size if we replaced them with data scientists, but data scientists are more expensive, and you'd also need to pay a license for whatever ML platform you use. None of our POCs with various consultants or academics have produced anything better than what we already have. In fact, in every test, our own human-built models turned out to be much better at predicting and auditing than the ML ones.
AI is also struggling outside of big data. We've run a POC on self-driving cars for three years now, and they don't work at all. They could certainly work if we replaced and maintained every piece of road infrastructure to their exact needs, but that's never going to happen.
I guess AI will eventually find a use outside of advertising and privacy invasion, and don't get me wrong, we certainly use it for things like trawling through millions of old case files as part of our quality control. So it's not completely useless, but I don't see how it'll change the world anytime soon.
It's like one of those general public TED inspirational talk except it tries to disguise itself as a scientific or philosophic one.
I always end up watching the first 20 minutes because the anecdotes are entertaining in themselves, then I realize the talk doesn't go anywhere, and I go back to work.
(And then I go back to rereading the Wikipedia page about Alan Kay to try to remember what made him so famous.)
PS: as a comparison, I think Bret Victor's talks are immensely more interesting. They also address the general use of computers today, make you think about it quite deeply, inspire you to try new things, and actually demonstrate something new every time. And you never hear him mention his work at Apple.
I've seen people ask him great questions (or at least I thought they were) and they only got a riddle in reply. That's only acceptable if you're a dragon or a hobbit in my opinion.
Instead, I'd like to see a short and concise summary of his ideas on where we are, what we're doing wrong, where we could improve, etc.
It would certainly be interesting to see a future where hardware is built to run a Smalltalk OS natively and you have full control all the way down, but he would probably say that I'm missing the point.
Looking back on a lot of Alan Kay's writing, I've noticed that when he was young he tended to write extremely long winded explanations of what he was trying to do. As he got older, the explanations got more terse. Now he doesn't explain at all: he just asks a related question. In my mind, I don't think this is by accident.
I've been starting to go in the same direction. I'm prone to writing extremely expansive replies to questions (just see my posting history here ;-) ). Some people read it, but most will not. I found that while I lose some fidelity with a more terse answer, I get better traction from shorter answers. However, the audience I add by making my answers more digestible tends to interpret the answers literally -- meaning that they don't think past what I've written.
So I started wondering if replying with questions as Alan Kay seems to do now would be useful. It certainly has its advantages. Although it goes in the opposite direction of increasing your audience, if you feel that most people aren't going to "get it" anyway, perhaps that's not necessarily a loss. It also is a cue to say, "This is a complex issue and you need to go back and look at some fundamentals before you can understand the answer". Those people who aren't willing to do that, probably aren't willing to put the work into understanding the answer anyway. And finally, in the very likely case that my answer is not actually very good, asking a question instead allows the person to formulate a better answer than I can come up with. This latter bit is especially worth thinking about, I think.
But I've resisted doing that as it is certainly intimidating and in some ways makes you look like a jerk ;-). BTW, I recently read some of his comments on what it means to be "object oriented" and I tried my best to build a system based on what he said. The results were extremely illuminating for me. Whether or not it matches his view, I found that working hard on puzzling out his comments took me in valuable directions that I had not considered before.
I started out by deciding to write a blog post. Here is my rambling second draft: https://github.com/ygt-mikekchar/oojs/blob/master/oojs.org#r...
However, as I was writing this blog post, it became obvious to me that I needed something better than Shape/Rectangle toy explanations. I needed to build something real. So I decided to build a test framework.
Here it is: https://gitlab.com/mikekchar/testy
I wrote a quick explanation of the design I was using here: https://gitlab.com/mikekchar/testy/blob/master/design.md You can read that first if you just want to see what I was doing and don't want the long winded explanation of how I got there from the blog post.
While I was writing this code, I decided to make a coding standard for myself because sometimes it helps to add constraints to examine how things are working. The coding standard is here: https://gitlab.com/mikekchar/testy/blob/master/coding_standa...
I suppose TL;DR: Objects should not represent ADTs. Instead they hold state. The object is really a collection of operations on that state (bundled together to give you better cohesion). The state is encapsulated in the object and should be inaccessible except through the operations. Objects should probably be immutable as well -- especially as it enforces that encapsulation. The best way to think about it is that the object itself is a monad (and it literally is). The methods on the object are functions that you would normally pass to bind. The "." that does the dispatch is essentially bind. I found that I never actually reached for subclass polymorphism (even though it was easy to implement). Instead I used essentially traits and I think this is in keeping with the modern idea that OO should encourage composition over inheritance.
Why do this instead of FP? Well, as you can see, my code is really FP with "FP Objects". I think the thing I liked about this is the idea that the objects created a nice abstraction for cohesive code. It's not that different than type classes (and I always think it's funny that type classes are usually implemented with virtual function tables). However, it is slightly more restricted in terms of generic functions. In a system without static type checking, I think it's easier to reason about this code. YMMV.
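As a rough illustration of the "FP Objects" style described above, here is a minimal sketch; `counter` and `describable` are hypothetical examples I made up, not code from Testy:

```javascript
// A minimal sketch of the "FP Objects" style described above.
// `counter` and `describable` are hypothetical, not code from Testy.

// An object is a closure over immutable state; its methods are the only
// operations on that state, and each mutation returns a *new* object.
const counter = (count = 0) => ({
  value: () => count,
  increment: () => counter(count + 1),
  add: (n) => counter(count + n),
});

// Trait-style composition instead of subclassing: merge a bundle of
// operations onto an existing object.
const describable = (self, name) => ({
  ...self,
  describe: () => `${name}: ${self.value()}`,
});

const c = describable(counter().add(2), "clicks");
console.log(c.describe());                          // "clicks: 2"
console.log(counter().add(2).increment().value());  // 3
```

Note that since each method returns a fresh object, the "." chaining really does behave like bind threading state through a sequence of functions.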
In the end, I found this way of programming very enjoyable. The code is still quite crufty, so please don't judge me ;-) I was just writing it to explore the ideas. It's not production code.
I considered going on and finishing Testy and possibly building something else with that style of programming, but it turns out that all current implementations of JS are quite inefficient when using closures (and may even leak memory!), so I decided to work on something else.
If you have any questions or comments, feel free to fire away.
I have a lot of respect for Alan Kay too but what you're hinting at is something that's bothered me about him for a long time: his "solution" for almost every problem is for everyone to be as smart as his Xerox PARC dream team.
He's like the Phil Jackson of technology: sure, he's deserving of a ton of respect, but he wouldn't win anything with the 2012 Bobcats. That strategy doesn't work with society at large; you've got to work with the people as they are, not as you wish they would be.
Alan's all about looking at the big picture and not getting lost in irrelevant details. He tries to get his point across right away by showing how NIMH is so focused on canonical mental health problems that they don't think about where these problems come from, nor about even bigger mental "problems" (such as the 99.77%, if you watched through the talk), nor about what "mental" means in the first place.
He may not know how to fix healthcare, but he shows what is necessary for us to think about if we ever want problems to be solved. He then follows with examples from the areas he's more familiar with, namely tech and education, to show how to get to solutions without brute force.
Human critical thinking at its very best.
How is being scientific or philosophical exclusive of giving interesting examples that people immediately get?
I’d say being able to speak in a way that your audience gets is an inherent part of philosophy.
Being scientific isn’t being complicated, it’s basing what you say on a foundation of evidence.
Being philosophical isn’t being abstract, it’s basing what you say in reason.
If you’re neither of those things, you’re seldom worth listening to.
My point is that people refer to being scientific or philosophic as being detached from ordinary people, as if scientists or philosophers would purposely want to be seen as different and hard to understand (e.g., see the critique of the Deep Learning academic community by Jeremy Howard during the Fastai DL courses, an example that pops up quite often here on HN).
I meant that Alan doesn't have an arrogant attitude, but I could've phrased my sentence in a better way. Thanks for the feedback.
It may be human critical thinking, but then it needs some structure and a feeling of progression in the reasoning, actually leading somewhere (and if that somewhere could be anything new, more than just "computers could be better learning devices", that would actually help).
I don't see any other common point among all those examples, and the talk didn't give me any either...
What do you mean by his ideas on how to educate children? Not dumbing down, and in the process using computers?
I don't think this in itself is anything original, unless you provide some examples of how that would look today (the video sample in the talk shows a child manipulating a computer interface from the early '90s, so I couldn't tell how old that video was).
But more importantly, why spend the first 40 minutes on general pseudo-scientific anecdotal statements if you're eager to talk about that? Do you think people really need to be convinced that education is important?
Well, I took away a lot more than 'not dumbing down'. One of the key things he described is the fact that very few people can sustain their attention for ingesting information, and that getting anyone engaged in practice and exploration at the right level is crucial to them successfully mastering concepts. The 't-ball for x' idea. Whether teachers are teaching stuff that's dumbed down or not turns out not to make a difference: People learn stuff by grappling with it in a 'hands on' way. Getting real mathematicians and scientists involved to create these kinds of environments is part of this. He led with saying this has been said before. My own take is that he said it very well, but clearly that's not everyone's experience. What were the pseudo-scientific statements that stand out for you? My detector didn't go off on anything as pseudo science.
Alan Kay is a supreme genius, there can be no question about that, but also an original and deep thinker who is not afraid of failing to fall in line with the prevailing ideology of the day. He is deeply rooted in the outside, the sphere of human thought and potential that iconoclastic transformative ideas spring from. And here you come, unfavorably comparing him to Bret Victor rather than seeking to fill the gaps within you that do not let you appreciate what the genius is plainly talking about. In a way you are testament to the point Alan Kay is making. Your inability to grasp simple - yet unpopular and certainly not easy - ideas and your preference for easily-digestible material demonstrates the degree of cultural erosion in play today.
Well, to be fair, Alan has been a Turing Award recipient since 2003. He's hardly ostracized, and if he's ignored, it's because his cause is ignored.
He speaks of a broken present leading to a wider broken future. And to conceive that things are broken requires a deep understanding of the fundamentals of what you're looking at. Is your car currently horribly inefficient? You probably (like me) would have no idea. It would require knowing what every part does, what it was designed to do, and why. That's a big ask. It requires a deeply fundamental understanding of it all.
And this -- this -- is what he's talking about in the lecture. He's gone from proselytizing the brokenness to preaching how to build the new mindset that can recognize it, which is the first and best shot at addressing it.
If everyone was like this (and I agree, a lot of TED speakers are), I'd go crazy.
The fact is, most of human existence is spent observing, studying, thinking, following, and viewing existing knowledge; very few interact, create new work, or are producing at any one time on any one subject or concentration, though we are all the 1s for something.
The key, when you are that 1 to the 1,000s or more, is to be real and truthful, to provide value, wisdom, and entertainment, and to be interesting; that can draw in more 1s toward an eventual critical mass and success, or world-changing paths. The trickle of small change eventually reaches critical mass, growing slowly, with conversion rates that start to compound.
Humans are a differentiation machine: they follow along the branches of success or current realities, and at a certain point differentiate themselves by forging a new path. Others then weigh the pros and cons, and sometimes that path becomes a main branch; but new branches take risk, dedication, resources, and creativity, as well as convincing others to follow.
The bright side way to look at it is, there is always a quarter of one percent of people that are conscientious objectors, speak up, make change, innovate, produce, create or more all the time.
The other thing that may be at play is the overview effect, which was recently brought up: it may take 40-50 seconds to read a post, but if it takes everyone 10 minutes to write one, then even if everyone on HN spent a quarter of their time writing posts, you would still get the 1/100 ratio. Just food for thought.
> A total of 170,000 men received C.O. deferments; as many as 300,000 other applicants were denied deferment. Nearly 600,000 illegally evaded the draft; about 200,000 were formally accused of draft offenses. Between 30,000 and 50,000 fled to Canada; another 20,000 fled to other countries or lived underground in America.
Because I had the feeling while watching Alan's talk that he wasn't talking about military drafts -- which are exclusively male anyway, and he's talking about people, not men -- but rather about people in the more informal second sense of the term.
Or, stated differently: people who remained passive during wars, even if they were actually against the mass murders and large-scale horrors.
Then again, if it's the second definition, where in the world did he get those statistics from? How would you even reliably determine who has or hasn't publicly stated their position? I mean, there's no reliable written historical record of this position, is there?
Then again, reading your source, I get the impression that the Vietnam War had an especially and notably high level of public disagreement [emphasis mine]:
> The Vietnam War, as it is popularly called although war was never officially declared by the United States, produced a very organized network of draft resisters and supporters [for more information about the history of the Vietnam War, click here]. Rejection of conscription stemmed from opposition to militarism and war itself, to disagreement with the United States' foreign policy in Indochina, and/or to the belief that the draft epitomized injustice as it was weighted heavily against African Americans, the poor, and the less educated. Whatever the reason, a sizable contingent of young men declared that this armed conflict at least had no claim on them. During this time, draft counseling services expanded sizably, and groups were formed all over the country to provide support for draft resisters. As dissent spread, it polarized new constituencies among professionals, civil rights groups, and women's organizations. Massive anti-war rallies were held, as well as rallies in which hundreds of young men turned in or burned their draft cards. GI resister groups spread, so that dissent was coming from the armed forces as well as those not yet in the military.
WWII, same page
> During WWII, the Selective Training and Service Act of 1940 dictated the terms by which more than 34 million American men, ages 18 to 44, participated in the war effort. Of the men who registered for the draft, there were 72,354 who applied for conscientious objector status. Of those, 25,000 accepted noncombatant service in the army, agreeing to work for the medical Corps or in anything that did not involve actual combat. Another 27,000 failed the basic physical examination. In the end, 6,086 C.O.s (4,441 of them Jehovah's Witnesses) went to prison for refusing to cooperate with Selective Service. Another 12,000 men entered Civilian Public Service (CPS), a program under civilian direction designed to accommodate C.O.s by having them do "work of national importance." [cf. Keim]
That is, 72,000/34,000,000 ~= 72/34,000 ~= ... 0.2%
... But if we read on, post-Vietnam War:
> Draft registration was reinstituted in July 1980; from then until 1985, over 500,000 men refused or failed to register.
It's hard, because I have to estimate the number of people who were eligible for the draft, but let's say the ratio is about the same as it was in 1940.
In 1940, there were ~130mln people in the US ~= 65mln men, 34mln draftable; 34mln/65mln = 52% of men were draftable, so say about a fourth of the population.
In 1985, there were ~238mln people in the US, so about 60mln men draftable.
500,000 / 60,000,000 = 0.83%, which is still low, but considerably higher than 0.2%, it's closer to 1%. [[edit: and I just noticed that the 500,000 includes failed to register, whereas the earlier statistic doesn't. This would lower the estimation by quite a bit, I'm not going to redo it. But I guess it'd be around 0.2-0.3% again]]
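The back-of-envelope arithmetic above can be checked in a few lines (all population figures are the commenter's rough estimates, not official data):

```javascript
// Back-of-envelope check of the conscientious-objector ratios above.
// Figures are rough estimates taken from the discussion, not official data.

const wwiiApplicants = 72_354;     // C.O. applications, WWII
const wwiiRegistered = 34_000_000; // men registered for the draft
const wwiiPct = (100 * wwiiApplicants / wwiiRegistered).toFixed(2);
console.log(`WWII: ${wwiiPct}%`);  // "WWII: 0.21%"

const refusers = 500_000;          // refused or failed to register, 1980-85
const draftable1985 = 60_000_000;  // ~52% of ~119M men, as estimated above
const pct1985 = (100 * refusers / draftable1985).toFixed(2);
console.log(`1980-85: ${pct1985}%`); // "1980-85: 0.83%"
```
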
So oddly enough, I think you're right: Alan seems to be off with his statistic. The only thing that comes to mind quickly is that he's not talking about the US population but rather the world population, but even then it's a bit weird, because he's specifically talking about conscientious objection in the 20th century.
Then again, even if the total and average number of conscientious objectors was as "high" as 1%, his point would still stand: 99% of people aren't against killing if the culture allows it. Even a jump of an entire order of magnitude isn't enough to make his point any less valid, imho.
... Not to forget his remark about the philosopher who got the number of teeth women have wrong. You're just insulting yourself if you make a similar mistake anywhere during the rest of your talk, lol.
My point was, making an example out of Aristotle's mistake is only "legitimate" if you yourself don't make any major mistakes in your talk. I.e., if Alan's wrong about the 0.23% and then makes fun of Aristotle for getting a number wrong, then you're [Alan] just making a fool of yourself.
I don't question at all that Aristotle got the number of teeth wrong, lol.
* The video works fine in Google Chrome, but in Mozilla Firefox it seems to force the Adobe Flash player. Not cool.
Firefox 63 on Windows, DNT enabled, uBlock Origin and Privacy Badger extensions.
If you define most people as mentally ill, what does that say about your own diagnosis?
The human brain has been selected (in an evolutionary sense), and its errors, if they exist at all, are likely related to its unnatural environment.
His example with the dentist and the plastic is actually just an example of skin in the game.
The dentist doesn't have the skin in the game, the patient does, so the patient notices.
Sure, older educated people could help the next generation, but given the inter-generational financial put they have participated in, I think it's obvious how much they actually care about the future. As the thought leader of their time said: "It's not me who will die, it's the world that will end."
When we don't know something, we ought not to operate as if we believe conclusions made on the basis of what we have not confirmed.
I don't think individual mental illness plays much of a role. Most people in power are rational, self-interested actors.
That gave me a chuckle
Also: the non-click-baity title is "Is it too late to create the future?" (The answer, by Betteridge's law of headlines, is of course "no".)
If "the best way to predict the future is to invent it", is it too late to invent a healthy future?
The answer to this one is not clear and Kay's answer in this talk would be, "It depends on what we teach the children of today."
I would highly recommend watching the whole thing. Also: he says he started making up these pithy slogans because most people can't read/understand a paragraph of text.
* Why is this star MLB pitcher being ignored? (N/A)
* Who do you think should be CNN Hero of the Year? (N/A)
* What's behind Whitey Bulger's death? (N/A)
* Facebook is pivoting. Will users follow? (No!)
If the past teaches us anything, it is that possible futures are many, our understanding of what's possible is fluid, and predicting and planning the future is hard.
The actual title was "Is it too late to create a healthy future?"
...blame it on the low-pass filter of mass communication
> Is it too late to create a healthy future?
Personally I don’t think so.
Creating a healthy future depends on defining what is unhealthy (within reason; this is also subjective).
Providing context and teaching people/our children why we think it is unhealthy.
Letting them decide on their own.