Everything that Drew McDermott thought wouldn't really happen did:
> "To sketch a worst case scenario, suppose that five years from now the strategic computing initiative collapses miserably as autonomous vehicles fail to roll. The fifth generation turns out not to go anywhere, and the Japanese government immediately gets out of computing. Every startup company fails. Texas Instruments and Schlumberger and all other companies lose interest. And there's a big backlash so that you can't get money for anything connected with AI. Everybody hurriedly changes the names of their research projects to something else. This condition, called the "AI Winter" by some, prompted someone to ask me if "nuclear winter" were the situation where funding is cut off for nuclear weapons. So that's the worst case scenario.
> "I don’t think this scenario is very likely to happen, nor even a milder version of it."
Exactly that happened, from the autonomous vehicles not panning out to funding evaporating. I remember it well.
Then Waldrop's comment could almost have been written last week (except for the loss of Marvin, RIP), down to the confusion between computers and "AI".
> The third conversation was with Bob Wilensky, a former student of mine, who asked me what I was going to do on this panel. He asked if I thought doomsday was coming. I said, "Yes." And he said, "No, you're wrong." I asked why. He said, "It's already here. There's no content in this conference." Now I think there's something serious to be concerned about there. He isn't the only person I've heard express that view. If that's true and there's no content in this conference, then doomsday is already here.
> I got scared when big business started getting into this: Schlumberger, Xerox, Hewlett-Packard, Texas Instruments, GTE, Amoco, Exxon. They were all making investments; they all have AI groups. You start to wonder who could be in the AI groups. We haven't got that many people in AI. And you find out that those people weren't trained in AI. They read an AI book, in many of these cases. They started off reading all the best AI research. After a while you discover AI group after AI group whose people were only peripherally in AI in the first place. What's going to happen is that those companies will find that their groups aren't producing as well as they had expected. When they find that, they will complain; they will say nasty things about AI. The presidents of those companies will be talking to the people who are not at ARPA, but at the Secretary of Defense level.
Schlumberger had Marty Tenenbaum; Xerox had Alan Kay (and a host of others); etc. Also, the defense guys were recruiting heavily from the MIT, Stanford, CMU, et al. AI labs. Exxon etc. got their "AI" through Schlumberger and didn't want scientists, but business folks.
It does give the research perspective of the '80s, though: ARPA was still considered the big fish. Now they aren't a major player. Still significant, yes, but not the way they were, either in structure or in % of overall investment.
There's another big difference, though: my perception is that the gap between business and science in the '80s was vast... but now I think that most Stanford and MIT faculty are involved in business ventures.
> "The computer seems to be a mythic emblem for a bright, high-tech future that is going to make our lives so much easier." This time, there's a big groundswell of distrust and anger against "Big Tech."
There was a huge level of fear about digital tech (and other tech, like nukes) from the '60s ("Do not fold, spindle or mutilate" was widely cited as a sign of soulless Big Tech) through the '80s (with its fear of 1984 and future shock). If anything, today people are much more accepting of the increased digitization of life. Certainly people today fear AI and computing to a much smaller degree than in the 1980s.
There is some distrust of the big tech companies now, but still far from the level of the 60s.
I don't remember the '60s that well, but certainly when I was in university in the early '80s, a lot of the nontechnical stuff we read at MIT was on perspectives of technology from the '60s-'80s.
Pardon my ignorance then. Would it be accurate to distinguish between fear of tech (high then, low now) versus tech enabled capitalism (??? then, high now)?
I don't think there's much fear of technology right now; certainly it's at its lowest in my lifetime.
There is some hysteria whipped up by the press and certain politicians against a few large corps, but it's no different from any other short-term hysteria. For the press it's understandable: FB et al. challenge their position, an environment in which many had built their careers. So why not attack it? For the politicians it's the usual grab bag of short-term point-scoring.
What there isn't is fear of big corps; Boeing will be bailed out, and ADM, Monsanto, Exxon et al. get to do whatever they want, and essentially zero percent of the people complain. Likewise, most people like their phones and other technical devices (fans, air fryers, banana slicers and internet-connected sex toys).
There's little of the existential dread that annoyingly used to be factored into most decisions. The closest today are the utterly fabricated threats of terrorism, dope fiends, and rabid invaders, which, while widely discussed, only seem to bother a few marginal figures.
Distrust? Sort of. People like to whine about Facebook, but they are still using it, or they migrate from Facebook to Instagram. Amazon is still growing. The news that the US government was spying on everyone came out years ago, and nobody did anything about it.
The current story is that automation will make all the dull jobs go away, and we're going to get Universal Basic Income. That's the mythic emblem for a bright, high-tech future, alive and well.
I work in AI, and nobody is stopping to ask the question: how happy are people going to be in a future without a practical sense of purpose, where a machine can do anything you could do better than you? Yes, I know, if you're a well-adapted individual, you should be able to derive your sense of purpose somewhere else besides work... Like, by making paintings that nobody will care about.
UBI utopia: everyone is free to live life to the fullest, pursue their creative passion and have fulfilling friendships in a stress-free environment.
UBI dystopia: everyone is crammed in a tiny standardized living unit, barely long enough to lie down in, the cities are all slums. Everyone feels useless and disconnected. People spend their time playing videogames, using VR porn and doing copious amounts of chemical drugs.
Oh - I agree with that; IMO the AI of today and the next 40+ years (my life, probably) is not going to be close to human equiv, or mouse equiv. But it doesn't need to be.
But AI is, today, a really useful set of new tools, which takes a lot of expertise to wield successfully. I expect that progress for the next decade will be slower than most people think, but I still think we will get some improvement over the current state of the art, and there is a lot of art to propagate and utilize.
And mouse equiv is not an issue for me. We have a lot of mice around. Same for humans. We are not short of autonomous agents with common sense, full on NLP and a strong grasp of naive physics. Goodness me, they even come with a self repairing all terrain chassis! What we are short of (right now) is agents that can make rational sense of large scale, rapidly changing, complex and incomplete data; and can make and enact decisions over said data.
There was quite an interesting take on the period by Moravec, on how the available hardware had stagnated during it (https://jetpress.org/volume1/moravec.htm). The 1 million instructions per second (1 MIPS) he describes compares with, say, the $120 "best budget graphics card" AMD Radeon RX 570 today, which does about 5,100,000 MIPS, so quite a difference.
>Funding improved somewhat in the early 1980s, but the number of research groups had grown, and the amount available for computers was modest. Many groups purchased Digital's new Vax computers, costing $100,000 and providing 1 MIPS. By mid-decade, personal computer workstations had appeared. Individual researchers reveled in the luxury of having their own computers, avoiding the delays of time-shared machines. A typical workstation was a Sun-3, costing about $10,000, and providing about 1 MIPS.
>By 1990, entire careers had passed in the frozen winter of 1-MIPS computers, mainly from necessity, but partly from habit and a lingering opinion that the early machines really should have been powerful enough. In 1990, 1 MIPS cost $1,000 in a low-end personal computer. There was no need to go any lower. Finally spring thaw has come. Since 1990...
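A back-of-the-envelope sketch of that gap, using only the figures quoted above and in the Moravec excerpt (the RX 570 throughput is the commenter's estimate of roughly 5.1 trillion operations/second expressed as MIPS, not an official spec):

```python
# Price/performance comparison using the figures from the thread above.
# All throughput numbers are in MIPS (millions of instructions per second).

vax_1982 = {"price_usd": 100_000, "mips": 1}    # Digital VAX, early 1980s
sun3_1985 = {"price_usd": 10_000, "mips": 1}    # Sun-3 workstation, mid-1980s
rx570 = {"price_usd": 120, "mips": 5_100_000}   # commenter's estimate, ~5.1e12 ops/s

def usd_per_mips(machine):
    """Dollars per MIPS: lower is better."""
    return machine["price_usd"] / machine["mips"]

# Raw throughput ratio versus a 1-MIPS VAX: 5,100,000x.
speedup = rx570["mips"] / vax_1982["mips"]

# Price/performance ($/MIPS) improvement: $100,000 per MIPS then,
# versus about $0.0000235 per MIPS now -- roughly a factor of 4.25 billion.
price_perf_gain = usd_per_mips(vax_1982) / usd_per_mips(rx570)

print(f"Raw speedup over a 1-MIPS VAX: {speedup:,.0f}x")
print(f"Price/performance improvement: {price_perf_gain:,.0f}x")
```

The point of the sketch is only scale: even if the MIPS figures are loose estimates, the gap Moravec calls the "frozen winter of 1-MIPS computers" spans six to nine orders of magnitude.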
There is some frighteningly prophetic speculation here, and not just about AI:
"Governments can use large databases to violate people’s privacy and to harass them. For that matter, credit card companies can do that, too. One can even envision a natural language system that monitors telephone conversations."
> Now the police dreams that one look at the gigantic map on the office wall should suffice at any given moment to establish who is related to whom and in what degree of intimacy; and, theoretically, this dream is not unrealizable although its technical execution is bound to be somewhat difficult. If this map really did exist, not even memory would stand in the way of the totalitarian claim to domination; such a map might make it possible to obliterate people without any traces, as if they had never existed at all.
I guess one could also call that "map" a social graph, right? That was written by Hannah Arendt in "The Origins of Totalitarianism", in 1951.
Sebastian Haffner in "Germany: Jekyll & Hyde (1939 - Deutschland von innen betrachtet)" (1940), page 162, rough translation by me:
> Another circumstance has to be mentioned, which has proven favourable for the Nazis and their terribly powerful apparatus of oppression: the development of modern technology gives the rulers an advantage over the ruled, which for a long time wasn't sufficiently understood. [..] The Bastille could not be successfully stormed in the age of airplanes and tear gas. [..] Transportation has led to countries becoming small and easy to surveil. How many hideouts existed in a country a hundred years ago! Any power ran up against its natural limitations then. Today there is nowhere to hide for the rebel. Even thoughts, which can pass through walls, can be "controlled", because they are tied to the mass distribution of news, to radio, film and press. How long until every house has its own microphone and every private word, like today every telephone conversation, can be listened in on? The ant state is at hand. Perhaps it is no coincidence that states like Germany and Russia have raised technology to the rank of a religion. At the same time, this development makes the preservation of freedom a task for humanity that is more urgent than ever. But this is beside the current topic.
For the sake of completeness (translating stuff in a hurry never feels quite right) the German original:
> Noch ein weiterer Umstand muß erwähnt werden, der sich für die Nazis und ihren ungeheuer mächtigen Unterdrückungsapparat als günstig erweist: Die Entwicklung der modernen Technik verschafft den Herrschenden, wie man lange ungenügend verstanden hat, einen Vorteil gegenüber den Beherrschten. Je wirksamer die Waffen werden und je weniger man sich gegen sie schützen kann, desto mehr ist der Bewaffnete den Unbewaffneten überlegen. Die Bastille könnte im Zeitalter der Flugzeuge und des Tränengases nicht erfolgreich erstürmt werden. Mit Gewehren ausgerüstete Bürgerwehren haben keine Chance mehr gegen motorisierte Polizeitruppen; es hat keinen Sinn, Barrikaden gegen eine Regierung zu errichten, die über Panzer verfügt. Und nicht nur die Waffenentwicklung begünstigt im Falle einer Revolution die Machthaber, den Staat gegenüber den einzelnen: Die moderne technische Entwicklung und die damit einhergehende ausgeklügelte Organisation wirken in der gleichen Richtung. Der Verkehr hat dazu geführt, daß die Länder klein geworden sind und sich leicht überwachen lassen. Wie viele Verstecke hab es in einem Land vor hundert Jahren! Jede Macht stieß damals gegen natürliche Schranken! Heute gibt es kein Schlupfloch und keinen Schlupfwinkel mehr für den Rebellen. Selbst die Gedanken, die Mauern zu durchdringen vermögen, sind "steuerbar" geworden, da sie an die massenhafte Verbreitung von Nachrichten, an Rundfunk, Film und Presse, gebunden sind. Wie lange wird es dauern, bis jedes Haus sein eigenes Mikrofon hat und jedes private Wort, wie heute jedes Telefongespräch, abgehört werden kann? Der Ameisenstaat ist nahe. Es ist vielleicht kein Zufall, daß solche Staaten wie Deutschland und Rußland die Technik in den Rang einer Religion erhoben haben. Umgekehrt macht diese Entwicklung der modernen Technik die Bewahrung der Freiheit zu einer Menschheitsaufgabe, die dringlicher denn je ist. Aber das führt zu weit ab vom jetzigen Thema.
And here's an editorial about the AI winter that actually did happen, starting around five years after Drew McDermott's opening comment, just for that extra bit of cosmic irony:
I wonder if it's not just a question of human nature for things to go into a cycle of overhyped boom and bust. People get very excited about something, progress doesn't happen fast enough, and then they get bored and move on to something else.
I work in AI and I'm very much thinking that another AI winter could come. I find myself wondering where all the real-world AI deployments are, and how effective they are. To my knowledge, there aren't any super successful AI startups out there. There are startups getting a lot of investor money for AI research, but they don't have profitable business models. The venture capitalists are going to get tired of that at some point, if they aren't already.
I think that what we see could be called "fear of missing out" on the profits and benefits of AI.
I have also been working in the field since 1982. Deep learning is a great technology, but it is hurt by the lack of explainability and by what I would call the massive technical debt of not understanding how these systems work.
This Time It's Not Different.
Edit: I actually attended this panel.